UNet++: A Nested U-Net Architecture for Medical Image Segmentation
In this paper, we present UNet++, a new, more powerful architecture for
medical image segmentation. Our architecture is essentially a deeply-supervised
encoder-decoder network where the encoder and decoder sub-networks are
connected through a series of nested, dense skip pathways. The re-designed skip
pathways aim at reducing the semantic gap between the feature maps of the
encoder and decoder sub-networks. We argue that the optimizer would deal with
an easier learning task when the feature maps from the decoder and encoder
networks are semantically similar. We have evaluated UNet++ in comparison with
U-Net and wide U-Net architectures across multiple medical image segmentation
tasks: nodule segmentation in low-dose chest CT scans, nuclei segmentation in microscopy images, liver segmentation in abdominal CT scans, and polyp segmentation in colonoscopy videos. Our experiments demonstrate that UNet++ with deep supervision achieves an average IoU gain of 3.9 and 3.4 points over U-Net and wide U-Net, respectively.
Comment: 8 pages, 3 figures, 3 tables; accepted by the 4th Deep Learning in Medical Image Analysis (DLMIA) Workshop
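Since the nested, dense skip pathways are the paper's core idea, here is a minimal PyTorch sketch of a depth-3 variant. The channel widths, depth, and the two deep-supervision heads are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of UNet++-style nested skip pathways (assumed depth 3).
# Node X(i,j) fuses the upsampled X(i+1,j-1) with all same-level
# predecessors X(i,0..j-1), which is what narrows the encoder-decoder gap.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions, as in a standard U-Net stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class TinyUNetPlusPlus(nn.Module):
    def __init__(self, in_ch=1, n_classes=2, w=(32, 64, 128)):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.x00 = conv_block(in_ch, w[0])
        self.x10 = conv_block(w[0], w[1])
        self.x20 = conv_block(w[1], w[2])
        self.x01 = conv_block(w[0] + w[1], w[0])       # fuses X00, up(X10)
        self.x11 = conv_block(w[1] + w[2], w[1])       # fuses X10, up(X20)
        self.x02 = conv_block(2 * w[0] + w[1], w[0])   # fuses X00, X01, up(X11)
        # Deep supervision: one 1x1 head per nested decoder output.
        self.head1 = nn.Conv2d(w[0], n_classes, 1)
        self.head2 = nn.Conv2d(w[0], n_classes, 1)

    def forward(self, x):
        x00 = self.x00(x)
        x10 = self.x10(self.pool(x00))
        x20 = self.x20(self.pool(x10))
        x01 = self.x01(torch.cat([x00, self.up(x10)], dim=1))
        x11 = self.x11(torch.cat([x10, self.up(x20)], dim=1))
        x02 = self.x02(torch.cat([x00, x01, self.up(x11)], dim=1))
        return self.head1(x01), self.head2(x02)  # averaged (or pruned) at inference


if __name__ == "__main__":
    outs = TinyUNetPlusPlus()(torch.randn(1, 1, 64, 64))
    print([o.shape for o in outs])  # both (1, 2, 64, 64)
```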
Automated segmentation of intracranial hemorrhages from 3D CT
The Intracranial Hemorrhage Segmentation Challenge (INSTANCE 2022) offers a platform for researchers to compare their solutions for segmenting hemorrhagic stroke regions in 3D CT scans. In this work, we describe our solution to INSTANCE 2022. We use a 2D segmentation network, SegResNet from MONAI, operating slice-wise without resampling. The final submission is an ensemble of 18 models. Our solution (team name NVAUTO) achieves first place on the Dice metric (0.721) and rank 2 overall. It is implemented with Auto3DSeg.
Comment: INSTANCE22 challenge report, MICCAI 2022. arXiv admin note: substantial text overlap with arXiv:2209.0954
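A hedged sketch of the slice-wise inference described above: a 2D MONAI SegResNet applied per axial slice, with softmax probabilities averaged over an ensemble before the argmax. The checkpoint paths, class count, and init_filters value are assumptions; the actual submission ensembles 18 models trained with Auto3DSeg.

```python
# Slice-wise 2D segmentation of a 3D CT volume with an ensemble of
# MONAI SegResNet checkpoints (illustrative sketch, not the exact pipeline).
import torch
from monai.networks.nets import SegResNet

NUM_CLASSES = 2  # background / hemorrhage (assumed)


def load_models(ckpt_paths, device="cpu"):
    models = []
    for path in ckpt_paths:  # hypothetical checkpoint files
        net = SegResNet(spatial_dims=2, in_channels=1,
                        out_channels=NUM_CLASSES, init_filters=32).to(device)
        net.load_state_dict(torch.load(path, map_location=device))
        net.eval()
        models.append(net)
    return models


@torch.no_grad()
def segment_volume(volume, models):
    """volume: (D, H, W) CT tensor; returns a (D, H, W) label map.
    Each axial slice is segmented independently (no resampling), and the
    per-model softmax probabilities are averaged before the argmax."""
    labels = []
    for z in range(volume.shape[0]):
        sl = volume[z].unsqueeze(0).unsqueeze(0)               # (1, 1, H, W)
        probs = torch.stack([m(sl).softmax(dim=1) for m in models]).mean(dim=0)
        labels.append(probs.argmax(dim=1)[0])                  # (H, W)
    return torch.stack(labels)                                 # (D, H, W)
```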
Automated head and neck tumor segmentation from 3D PET/CT
The Head and Neck Tumor Segmentation Challenge (HECKTOR) 2022 offers a platform for researchers to compare their solutions for segmenting tumors and lymph nodes in 3D CT and PET images. In this work, we describe our solution to the HECKTOR 2022 segmentation task. We resample all images to a common resolution, crop around the head and neck region, and train a SegResNet semantic segmentation network from MONAI. We use 5-fold cross-validation to select the best model checkpoints. The final submission is an ensemble of 15 models from 3 runs. Our solution (team name NVAUTO) achieves 1st place on the HECKTOR22 challenge leaderboard with an aggregated Dice score of 0.78802.
Comment: HECKTOR22 segmentation challenge, MICCAI 2022. arXiv admin note: text overlap with arXiv:2209.0954
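The resample-then-crop preprocessing described above maps naturally onto MONAI dictionary transforms; a sketch follows. The 1 mm isotropic spacing and the PET-based foreground crop are assumptions for illustration, not the team's exact settings.

```python
# Resample CT and PET to a shared grid, then crop to the head/neck region
# (assumed heuristic: keep the region with non-trivial PET signal).
from monai.transforms import (Compose, CropForegroundd, EnsureChannelFirstd,
                              LoadImaged, Spacingd)

preprocess = Compose([
    LoadImaged(keys=["ct", "pet"]),
    EnsureChannelFirstd(keys=["ct", "pet"]),
    # Resample both modalities to a common resolution (assumed 1 mm isotropic).
    Spacingd(keys=["ct", "pet"], pixdim=(1.0, 1.0, 1.0), mode="bilinear"),
    # Crude crop around the head and neck, driven by the PET foreground.
    CropForegroundd(keys=["ct", "pet"], source_key="pet"),
])

# Hypothetical file names, for illustration only.
sample = preprocess({"ct": "patient_ct.nii.gz", "pet": "patient_pet.nii.gz"})
```

The final ensemble then averages the predictions of the 15 models (5 folds times 3 runs), in the same spirit as the softmax-averaging sketch shown for the INSTANCE solution above.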
Pseudo Supervised Metrics: Evaluating Unsupervised Image to Image Translation Models In Unsupervised Cross-Domain Classification Frameworks
The ability to classify images accurately and efficiently depends on having access to large labeled datasets and on testing data drawn from the same domain the model was trained on. Classification becomes more challenging
when dealing with new data from a different domain, where collecting a large
labeled dataset and training a new classifier from scratch is time-consuming,
expensive, and sometimes infeasible or impossible. Cross-domain classification
frameworks were developed to handle this data domain shift problem by utilizing
unsupervised image-to-image (UI2I) translation models to translate an input
image from the unlabeled domain to the labeled domain. The difficulty with these unsupervised models stems from their unsupervised nature: without annotations, the traditional supervised metrics cannot be used to evaluate these translation models and select the best saved checkpoint. In this paper, we introduce a new method called Pseudo Supervised Metrics, designed specifically to support cross-domain classification applications, in contrast to commonly used metrics such as the FID, which evaluates a model by the perceptual quality of its generated images. We show that our metric not only outperforms unsupervised metrics such as the FID, but is also highly correlated with the true supervised metrics, robust, and explainable. Furthermore, we demonstrate that it can serve as a standard metric for future research in this field by applying it to a critical real-world problem (the boiling crisis problem).
Comment: arXiv admin note: text overlap with arXiv:2212.0910
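The abstract does not spell out the metric's exact definition, so the following is a hedged sketch of one plausible instantiation: cycle labeled domain-A images through the UI2I checkpoint (A to B and back) and score the frozen domain-A classifier's accuracy on the reconstructions against the known A labels. The callables g_ab, g_ba, classifier, and loader_a are assumed; the paper's actual formulation may differ.

```python
# Checkpoint selection for a UI2I model without target-domain labels
# (plausible sketch only; not the paper's exact metric).
import torch


@torch.no_grad()
def pseudo_supervised_score(g_ab, g_ba, classifier, loader_a):
    """Higher is better: a checkpoint that preserves class-relevant content
    through the A->B->A cycle keeps the classifier's predictions correct."""
    correct, total = 0, 0
    for images, labels in loader_a:           # labeled domain-A batches
        cycled = g_ba(g_ab(images))           # translate A->B, then back to A
        preds = classifier(cycled).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total


# Usage: keep the saved UI2I checkpoint with the highest score.
# best = max(ckpts, key=lambda c: pseudo_supervised_score(c.g_ab, c.g_ba, clf, val_a))
```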
A Generalized Framework for Critical Heat Flux Detection Using Unsupervised Image-to-Image Translation
This work proposes a framework developed to generalize Critical Heat Flux
(CHF) detection classification models using an Unsupervised Image-to-Image
(UI2I) translation model. The framework enables a typical classification model, trained and tested on boiling images from domain A, to classify boiling images from a domain B it has never seen. This is done by using the UI2I model to transform the domain B images to look like the domain A images the classification model is familiar with. Although a CNN was used as the classification model and Fixed-Point GAN (FP-GAN) as the UI2I model, the framework is model-agnostic: it can generalize any image classification model type, making it applicable to a variety of similar applications beyond the boiling crisis detection problem, and its performance will improve as UI2I models advance.
Comment: This work has been submitted to the Expert Systems With Applications Journal on Sep 25, 202
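The inference pipeline described above reduces to two composed calls; a minimal sketch, assuming pretrained modules generator_b2a (FP-GAN in the paper) and classifier_a (the CNN), both hypothetical names here.

```python
# Translate unseen domain-B images into domain A, then reuse the
# unchanged domain-A classifier (model-agnostic by construction).
import torch


@torch.no_grad()
def classify_domain_b(images_b, generator_b2a, classifier_a):
    """images_b: (N, C, H, W) batch from the unseen domain B."""
    images_a_like = generator_b2a(images_b)  # make B look like familiar domain A
    logits = classifier_a(images_a_like)     # classifier is never retrained
    return logits.argmax(dim=1)              # e.g. pre- vs post-CHF label
```

Because neither module is modified, either one can be swapped for a stronger model, which is why the framework's performance tracks advances in UI2I translation.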
Brainomaly: Unsupervised Neurologic Disease Detection Utilizing Unannotated T1-weighted Brain MR Images
Harnessing the power of deep neural networks in the medical imaging domain is
challenging due to the difficulties in acquiring large annotated datasets,
especially for rare diseases, which involve high costs, time, and effort for
annotation. Unsupervised disease detection methods, such as anomaly detection,
can significantly reduce human effort in these scenarios. While anomaly
detection typically focuses on learning from images of healthy subjects only,
real-world situations often present unannotated datasets with a mixture of
healthy and diseased subjects. Recent studies have demonstrated that utilizing
such unannotated images can improve unsupervised disease and anomaly detection.
However, these methods do not utilize knowledge specific to registered neuroimages, resulting in subpar performance in neurologic disease detection.
To address this limitation, we propose Brainomaly, a GAN-based image-to-image
translation method specifically designed for neurologic disease detection.
Brainomaly not only offers tailored image-to-image translation suitable for
neuroimages but also leverages unannotated mixed images to achieve superior
neurologic disease detection. Additionally, we address the issue of model
selection for inference without annotated samples by proposing a pseudo-AUC
metric, further enhancing Brainomaly's detection performance. Extensive
experiments and ablation studies demonstrate that Brainomaly outperforms
existing state-of-the-art unsupervised disease and anomaly detection methods by
significant margins in Alzheimer's disease detection using a publicly available
dataset and headache detection using an institutional dataset. The code is available from https://github.com/mahfuzmohammad/Brainomaly.
Comment: Accepted in WACV 202
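A hedged sketch of the pseudo-AUC idea for model selection without annotated samples: treat anomaly scores from known-healthy images as negatives and scores from the unannotated mixed set (healthy plus diseased) as noisy positives, then rank checkpoints by the resulting AUC. This is one plausible reading of the abstract; the paper's exact formulation may differ.

```python
# Label-free checkpoint ranking via a pseudo-AUC over anomaly scores
# (illustrative sketch, assuming a known-healthy set and a mixed set).
import numpy as np
from sklearn.metrics import roc_auc_score


def pseudo_auc(scores_healthy, scores_mixed):
    """Both inputs are 1-D arrays of per-image anomaly scores."""
    y_true = np.concatenate([np.zeros(len(scores_healthy)),  # known healthy
                             np.ones(len(scores_mixed))])    # unannotated mix
    y_score = np.concatenate([scores_healthy, scores_mixed])
    return roc_auc_score(y_true, y_score)
```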
Fetal Brain Tissue Annotation and Segmentation Challenge Results
In-utero fetal MRI is emerging as an important tool in the diagnosis and
analysis of the developing human brain. Automatic segmentation of the
developing fetal brain is a vital step in the quantitative analysis of prenatal
neurodevelopment both in the research and clinical context. However, manual
segmentation of cerebral structures is time-consuming and prone to error and
inter-observer variability. Therefore, we organized the Fetal Tissue Annotation
(FeTA) Challenge in 2021 in order to encourage the development of automatic
segmentation algorithms on an international level. The challenge utilized the FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into
seven different tissues (external cerebrospinal fluid, grey matter, white
matter, ventricles, cerebellum, brainstem, deep grey matter). 20 international
teams participated in this challenge, submitting a total of 21 algorithms for
evaluation. In this paper, we provide a detailed analysis of the results from
both a technical and clinical perspective. All participants relied on deep
learning methods, mainly U-Nets, with some variability present in the network
architecture, optimization, and image pre- and post-processing. The majority of
teams used existing medical imaging deep learning frameworks. The main
differences between the submissions were the fine-tuning done during training,
and the specific pre- and post-processing steps performed. The challenge
results showed that almost all submissions performed similarly. Four of the top five teams used ensemble learning methods. However, one team's algorithm performed significantly better than the other submissions; it consisted of an asymmetrical U-Net network architecture. This paper provides a first-of-its-kind benchmark for future automatic multi-tissue segmentation algorithms for the developing human brain in utero.
Comment: Results from FeTA Challenge 2021, held at MICCAI; Manuscript submitted
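The ensembling highlighted above can be as simple as per-voxel majority voting over the label maps produced by several trained networks; a sketch, assuming each model already outputs an integer label volume over the seven tissue classes plus background.

```python
# Per-voxel majority vote across an ensemble of multi-tissue segmentations.
import numpy as np


def majority_vote(label_maps):
    """label_maps: list of (D, H, W) integer arrays, one per model."""
    stacked = np.stack(label_maps)                    # (M, D, H, W)
    n_classes = int(stacked.max()) + 1
    votes = np.stack([(stacked == c).sum(axis=0)      # vote count per class
                      for c in range(n_classes)])     # (C, D, H, W)
    return votes.argmax(axis=0)                       # consensus label map
```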